Outlier (statistics)
In statistics, an outlier is a data point that differs significantly from other observations. An outlier may be due to variability in the measurement, an indication of novel data, or it may be the result of experimental error; the latter are sometimes excluded from the data set. An outlier can be an indication of an exciting possibility, but can also cause serious problems in statistical analyses.

Outliers can occur by chance in any distribution, but they can indicate novel behaviour or structures in the data set, measurement error, or that the population has a heavy-tailed distribution. In the case of measurement error, one wishes to discard them or use statistics that are robust to outliers, while in the case of heavy-tailed distributions, they indicate that the distribution has high skewness and that one should be very cautious in using tools or intuitions that assume a normal distribution. A frequent cause of outliers is a mixture of two distributions, which may be two distinct sub-populations, or may indicate 'correct trial' versus 'measurement error'; this is modeled by a mixture model.

In most larger samplings of data, some data points will be further away from the sample mean than what is deemed reasonable. This can be due to incidental systematic error or flaws in the theory that generated an assumed family of probability distributions, or it may be that some observations are far from the center of the data. Outlier points can therefore indicate faulty data, erroneous procedures, or areas where a certain theory might not be valid. However, in large samples, a small number of outliers is to be expected (and not due to any anomalous condition).

Outliers, being the most extreme observations, may include the sample maximum or sample minimum, or both, depending on whether they are extremely high or low. However, the sample maximum and minimum are not always outliers because they may not be unusually far from other observations.

Naive interpretation of statistics derived from data sets that include outliers may be misleading. For example, if one is calculating the average temperature of 10 objects in a room, and nine of them are between 20 and 25 degrees Celsius, but an oven is at 175 °C, the median of the data will be between 20 and 25 °C but the mean temperature will be between 35.5 and 40 °C. In this case, the median better reflects the temperature of a randomly sampled object (but not the temperature in the room) than the mean; naively interpreting the mean as "a typical sample", equivalent to the median, is incorrect. As illustrated in this case, outliers may indicate data points that belong to a different population than the rest of the sample set.

Estimators capable of coping with outliers are said to be robust: the median is a robust statistic of central tendency, while the mean is not. However, the mean is generally a more precise estimator.


Occurrence and causes

In the case of normally distributed data, the three sigma rule means that roughly 1 in 22 observations will differ by twice the standard deviation or more from the mean, and 1 in 370 will deviate by three times the standard deviation. In a sample of 1000 observations, the presence of up to five observations deviating from the mean by more than three times the standard deviation is within the range of what can be expected, being less than twice the expected number (see Poisson distribution), and does not indicate an anomaly. If the sample size is only 100, however, just three such outliers are already reason for concern, being more than 11 times the expected number.

In general, if the nature of the population distribution is known ''a priori'', it is possible to test whether the number of outliers deviates significantly from what can be expected: for a given cutoff (so samples fall beyond the cutoff with probability ''p'') of a given distribution, the number of outliers will follow a binomial distribution with parameter ''p'', which can generally be well-approximated by the Poisson distribution with λ = ''pn''. Thus if one takes a normal distribution with a cutoff 3 standard deviations from the mean, ''p'' is approximately 0.3%, and for 1000 trials one can approximate the number of samples whose deviation exceeds 3 sigmas by a Poisson distribution with λ = 3.
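As a rough illustration of this approximation, the following sketch (assuming SciPy is available; the numbers follow the 3-sigma example above) computes the tail probability, the expected count λ, and the probability of seeing at most five such observations in 1000 samples:

```python
# Sketch: expected count of 3-sigma outliers in n normal samples,
# approximated by a Poisson distribution (assumes SciPy is available).
from scipy import stats

n = 1000
p = 2 * stats.norm.sf(3)   # two-sided tail probability beyond 3 sigma (~0.0027)
lam = n * p                # expected number of 3-sigma deviations (~2.7)

# Probability of observing at most 5 such deviations in 1000 samples.
print(f"lambda = {lam:.2f}")
print(f"P(count <= 5) = {stats.poisson.cdf(5, lam):.3f}")
```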


Causes

Outliers can have many anomalous causes. A physical apparatus for taking measurements may have suffered a transient malfunction. There may have been an error in data transmission or transcription. Outliers arise due to changes in system behaviour, fraudulent behaviour, human error, instrument error or simply through natural deviations in populations. A sample may have been contaminated with elements from outside the population being examined. Alternatively, an outlier could be the result of a flaw in the assumed theory, calling for further investigation by the researcher. Additionally, the pathological appearance of outliers of a certain form appears in a variety of datasets, indicating that the causative mechanism for the data might differ at the extreme end (the King effect).


Definitions and detection

There is no rigid mathematical definition of what constitutes an outlier; determining whether or not an observation is an outlier is ultimately a subjective exercise. There are various methods of outlier detection, some of which are treated as synonymous with novelty detection. Some are graphical, such as normal probability plots; others are model-based; box plots are a hybrid.

Model-based methods which are commonly used for identification assume that the data are from a normal distribution, and identify observations which are deemed "unlikely" based on mean and standard deviation (a minimal sketch of such a screen follows the list):
* Chauvenet's criterion
* Grubbs's test for outliers
* Dixon's ''Q'' test
* ASTM E178: Standard Practice for Dealing With Outlying Observations
* Mahalanobis distance and leverage are often used to detect outliers, especially in the development of linear regression models.
* Subspace and correlation based techniques for high-dimensional numerical data


Peirce's criterion

"It is proposed to determine in a series of ''m'' observations the limit of error, beyond which all observations involving so great an error may be rejected, provided there are as many as ''n'' such observations. The principle upon which it is proposed to solve this problem is, that the proposed observations should be rejected when the probability of the system of errors obtained by retaining them is less than that of the system of errors obtained by their rejection multiplied by the probability of making so many, and no more, abnormal observations." (Quoted in the editorial note on page 516 to Peirce (1982 edition) from ''A Manual of Astronomy'' 2:558 by Chauvenet.)


Tukey's fences

Other methods flag observations based on measures such as the interquartile range. For example, if Q_1 and Q_3 are the lower and upper quartiles respectively, then one could define an outlier to be any observation outside the range:

: \big[\, Q_1 - k (Q_3 - Q_1),\; Q_3 + k (Q_3 - Q_1) \,\big]

for some nonnegative constant k. John Tukey proposed this test, where k=1.5 indicates an "outlier", and k=3 indicates data that is "far out".
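A minimal sketch of Tukey's fences (assuming NumPy; the temperature data are made up for illustration, and NumPy's default linear-interpolation percentiles are one of several quartile conventions):

```python
# Sketch of Tukey's fences: k = 1.5 flags "outliers", k = 3 flags
# "far out" points.
import numpy as np

def tukey_fences(data, k=1.5):
    data = np.asarray(data, dtype=float)
    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return data[(data < lo) | (data > hi)]

temps = [20.1, 21.5, 22.0, 22.3, 23.1, 23.7, 24.2, 24.8, 25.0, 175.0]
print(tukey_fences(temps))        # -> [175.]
print(tukey_fences(temps, k=3))   # -> [175.] ("far out")
```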


In anomaly detection

In various domains such as, but not limited to, statistics, signal processing, finance, econometrics, manufacturing, networking and data mining, the task of ''anomaly detection'' may take other approaches. Some of these may be distance-based and density-based, such as the Local Outlier Factor (LOF). Some approaches may use the distance to the k-nearest neighbors to label observations as outliers or non-outliers.
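As an illustrative sketch (assuming scikit-learn is installed; the data and the n_neighbors value are arbitrary choices for the example), LOF can be applied as follows:

```python
# Sketch using scikit-learn's LocalOutlierFactor; fit_predict returns
# -1 for points whose local density is anomalously low and 1 otherwise.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, size=(100, 2)),  # dense cluster
               [[8.0, 8.0]]])                    # isolated point

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)
print(X[labels == -1])  # the isolated point is flagged
```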


Modified Thompson Tau test

The modified Thompson Tau test is a method used to determine if an outlier exists in a data set. The strength of this method lies in the fact that it takes into account a data set's standard deviation and average, and provides a statistically determined rejection zone, thus providing an objective method to determine if a data point is an outlier.

How it works: first, a data set's average is determined. Next, the absolute deviation between each data point and the average is determined. Thirdly, a rejection region is determined using the formula:

: \text{Rejection Region} = \frac{t_{\alpha/2}\,(n-1)}{\sqrt{n}\,\sqrt{n-2+t_{\alpha/2}^2}}

where t_{\alpha/2} is the critical value from the Student's ''t'' distribution with ''n''−2 degrees of freedom, ''n'' is the sample size, and ''s'' is the sample standard deviation. To determine if a value is an outlier, calculate \delta = |(X - \mathrm{mean}(X))/s|. If ''δ'' > Rejection Region, the data point is an outlier; if ''δ'' ≤ Rejection Region, it is not.

The modified Thompson Tau test is used to find one outlier at a time (the largest value of ''δ'' is removed if it is an outlier). That is, if a data point is found to be an outlier, it is removed from the data set and the test is applied again with a new average and rejection region. This process is continued until no outliers remain in the data set.

Some work has also examined outliers for nominal (or categorical) data. In the context of a set of examples (or instances) in a data set, instance hardness measures the probability that an instance will be misclassified (1 - p(y \mid x), where ''y'' is the assigned class label and ''x'' represents the input attribute values for an instance in the training set). Ideally, instance hardness would be calculated by summing over the set of all possible hypotheses ''H'':

: \begin{align} IH(\langle x, y\rangle) &= \sum_{h \in H} \big(1 - p(y \mid x, h)\big)\, p(h \mid t)\\ &= \sum_{h \in H} p(h \mid t) - p(y \mid x, h)\, p(h \mid t)\\ &= 1 - \sum_{h \in H} p(y \mid x, h)\, p(h \mid t).\end{align}

Practically, this formulation is infeasible, as ''H'' is potentially infinite and p(h \mid t) is unknown for many algorithms. Thus, instance hardness can be approximated using a diverse subset L \subset H:

: IH_L(\langle x, y\rangle) = 1 - \frac{1}{|L|} \sum_{j=1}^{|L|} p(y \mid x, g_j(t, \alpha))

where g_j(t, \alpha) is the hypothesis induced by learning algorithm g_j trained on training set ''t'' with hyperparameters \alpha. Instance hardness provides a continuous value for determining if an instance is an outlier instance.
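A sketch of the iterative procedure described above (assuming SciPy for the Student's ''t'' critical value; α = 0.05 and the sample data are illustrative):

```python
# Sketch of the modified Thompson tau procedure: repeatedly test the
# most extreme point against the rejection region and remove it if it
# is an outlier, recomputing the statistics each round.
import numpy as np
from scipy import stats

def thompson_tau_outliers(data, alpha=0.05):
    data = list(data)
    outliers = []
    while len(data) > 2:
        n = len(data)
        t = stats.t.ppf(1 - alpha / 2, df=n - 2)
        tau = t * (n - 1) / (np.sqrt(n) * np.sqrt(n - 2 + t**2))
        arr = np.asarray(data)
        delta = np.abs(arr - arr.mean()) / arr.std(ddof=1)
        i = int(np.argmax(delta))
        if delta[i] > tau:                # most extreme point is an outlier:
            outliers.append(data.pop(i))  # remove it and retest the rest
        else:
            break
    return outliers

print(thompson_tau_outliers([9.1, 9.3, 9.2, 9.2, 9.0, 9.3, 12.9]))  # -> [12.9]
```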


Working with outliers

The choice of how to deal with an outlier should depend on the cause. Some estimators are highly sensitive to outliers, notably the estimation of covariance matrices.


Retention

Even when a normal distribution model is appropriate to the data being analyzed, outliers are expected for large sample sizes and should not automatically be discarded. Instead, the application should use a classification algorithm that is robust to outliers to model data with naturally occurring outlier points.


Exclusion

Deletion of outlier data is a controversial practice frowned upon by many scientists and science instructors; while mathematical criteria provide an objective and quantitative method for data rejection, they do not make the practice more scientifically or methodologically sound, especially in small sets or where a normal distribution cannot be assumed. Rejection of outliers is more acceptable in areas of practice where the underlying model of the process being measured and the usual distribution of measurement error are confidently known. An outlier resulting from an instrument reading error may be excluded, but it is desirable that the reading is at least verified.

The two common approaches to exclude outliers are truncation (or trimming) and Winsorising. Trimming discards the outliers whereas Winsorising replaces the outliers with the nearest "nonsuspect" data. Exclusion can also be a consequence of the measurement process, such as when an experiment is not entirely capable of measuring such extreme values, resulting in censored data.

In regression problems, an alternative approach may be to only exclude points which exhibit a large degree of influence on the estimated coefficients, using a measure such as Cook's distance.

If a data point (or points) is excluded from the data analysis, this should be clearly stated on any subsequent report.
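A brief sketch contrasting the two approaches (assuming SciPy; the 10% limits and the temperature data are illustrative):

```python
# Sketch contrasting trimming with Winsorising.
import numpy as np
from scipy import stats
from scipy.stats.mstats import winsorize

temps = np.array([20.3, 21.1, 22.0, 22.7, 23.2, 23.9, 24.1, 24.6, 25.0, 175.0])

# Trimming: drop the most extreme 10% in each tail before averaging.
print(stats.trim_mean(temps, proportiontocut=0.1))

# Winsorising: replace the extreme 10% in each tail with the nearest
# retained values, then average.
print(winsorize(temps, limits=(0.1, 0.1)).mean())
```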


Non-normal distributions

The possibility should be considered that the underlying distribution of the data is not approximately normal, having "fat tails". For instance, when sampling from a Cauchy distribution, the sample variance increases with the sample size, the sample mean fails to converge as the sample size increases, and outliers are expected at far larger rates than for a normal distribution. Even a slight difference in the fatness of the tails can make a large difference in the expected number of extreme values.
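A small experiment (assuming NumPy; the seed is arbitrary) illustrates the non-convergence of the Cauchy sample mean:

```python
# Sketch: running means of Cauchy samples fail to settle down as the
# sample grows, while normal-sample means converge toward zero.
import numpy as np

rng = np.random.default_rng(7)
for n in (100, 10_000, 1_000_000):
    cauchy = rng.standard_cauchy(n)
    normal = rng.standard_normal(n)
    print(f"n={n:>9}: Cauchy mean = {cauchy.mean():8.3f}, "
          f"normal mean = {normal.mean():8.3f}")
```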


Set-membership uncertainties

A set membership approach considers that the uncertainty corresponding to the ''i''th measurement of an unknown random vector ''x'' is represented by a set ''X''i (instead of a probability density function). If no outliers occur, ''x'' should belong to the intersection of all ''X''i's. When outliers occur, this intersection could be empty, and we should relax a small number of the sets ''X''i (as small as possible) in order to avoid any inconsistency. This can be done using the notion of the ''q''-relaxed intersection: the set of all ''x'' which belong to all of the sets except at most ''q'' of them. Sets ''X''i that do not intersect the ''q''-relaxed intersection could be suspected to be outliers.
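A minimal one-dimensional sketch of the ''q''-relaxed intersection (the interval endpoints are made-up illustrative data; a sweep over the endpoints counts how many sets cover each region):

```python
# Sketch: for 1-D interval measurement sets, keep the region covered by
# at least n - q of the intervals (the q-relaxed intersection).
def q_relaxed_intersection(intervals, q):
    events = []
    for lo, hi in intervals:
        events.append((lo, 1))    # interval opens
        events.append((hi, -1))   # interval closes
    events.sort()
    need = len(intervals) - q
    depth, start, pieces = 0, None, []
    for x, step in events:
        before = depth
        depth += step
        if before < need <= depth:             # coverage rises to "need"
            start = x
        elif depth < need <= before:           # coverage drops below "need"
            pieces.append((start, x))
    return pieces

sets = [(1.0, 4.0), (2.0, 5.0), (1.5, 4.5), (9.0, 10.0)]  # last one: outlier
print(q_relaxed_intersection(sets, q=1))  # -> [(2.0, 4.0)]
```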


Alternative models

In cases where the cause of the outliers is known, it may be possible to incorporate this effect into the model structure, for example by using a hierarchical Bayes model or a mixture model.
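As an illustrative sketch (assuming scikit-learn; the two-component choice and the data are assumptions made for the example), a Gaussian mixture can let a separate component absorb a contaminating sub-population:

```python
# Sketch: fit a two-component Gaussian mixture so that one component
# models the main population and the other absorbs extreme points.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
X = np.concatenate([rng.normal(22, 1, 200),          # main population
                    rng.normal(175, 5, 8)])[:, None]  # contaminating process

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)
print(gmm.means_.ravel())   # the two component means, in no particular order
print(np.bincount(labels))  # component sizes (~200 vs ~8)
```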


See also

* Anomaly (natural sciences)
* Novelty detection
* Anscombe's quartet
* Data transformation (statistics)
* Extreme value theory
* Influential observation
* Random sample consensus
* Robust regression
* Studentized residual
* Winsorizing


External links

* Grubbs test, described by the NIST manual